I’m feeling excited to start this new course. I am expecting to learn how to use R and apply it to my studies in social psychology. I think that, with the amount of data collected every day, the social sciences as a whole have a great chance of improving. I first heard about this course after contacting Kimmo, as recommended by the coordinator of my master’s programme.
lrn14 <- read.table("data/learning2014.txt", header = TRUE, sep = "\t")
str(lrn14)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
dim(lrn14)
## [1] 166 7
As we can see, we have here a data frame imported as lrn14. The data reflects the relationship between learning approaches and students’ achievements. The dataset includes 166 observations of 7 variables: gender, age, attitude, the averages of the deep, strategic and surface approaches, and the points obtained in the exam.
pairs(lrn14[-1])
summary(lrn14)
## gender age attitude deep stra
## F:110 Min. :17.00 Min. :14.00 Min. :1.583 Min. :1.250
## M: 56 1st Qu.:21.00 1st Qu.:26.00 1st Qu.:3.333 1st Qu.:2.625
## Median :22.00 Median :32.00 Median :3.667 Median :3.188
## Mean :25.51 Mean :31.43 Mean :3.680 Mean :3.121
## 3rd Qu.:27.00 3rd Qu.:37.00 3rd Qu.:4.083 3rd Qu.:3.625
## Max. :55.00 Max. :50.00 Max. :4.917 Max. :5.000
## surf points
## Min. :1.583 Min. : 7.00
## 1st Qu.:2.417 1st Qu.:19.00
## Median :2.833 Median :23.00
## Mean :2.787 Mean :22.72
## 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :4.333 Max. :33.00
In the first plot we used the pairs() function to draw pairwise scatterplots of all variables except gender, showing the distribution of each variable as a function of the others.
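A richer alternative for the same overview is GGally’s ggpairs(), which adds density plots and correlation coefficients to the pairwise view (a sketch, assuming the GGally package is installed):
library(GGally)
library(ggplot2)
#ggpairs() draws all pairwise plots, coloured by gender, with histograms
#in the mixed panels and correlations in the upper triangle
ggpairs(lrn14, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))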
reg_model <- lm(points ~ attitude + stra + surf, data = lrn14)
summary(reg_model)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = lrn14)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.01711 3.68375 2.991 0.00322 **
## attitude 0.33952 0.05741 5.913 1.93e-08 ***
## stra 0.85313 0.54159 1.575 0.11716
## surf -0.58607 0.80138 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
We can observe that in this multiple regression analysis the F-statistic has a p-value very close to 0. This means that, in this model, at least one of the explanatory variables is very likely related to the dependent variable; looking at the individual coefficients, only attitude reaches statistical significance.
The R-squared represents the proportion of the variance of the points that is explained by the explanatory variables. In this case, our model explains approximately 20% of the variance of the points.
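As a quick check, the same figures can be extracted directly from the fitted model object (a minimal sketch using the reg_model defined above):
#Multiple and adjusted R-squared from the summary object
summary(reg_model)$r.squared
summary(reg_model)$adj.r.squared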
par(mfrow = c(2,2))
plot(reg_model, which = c(1,2,5))
We can observe from the plots that the model represents a good fit. For example, in the Q-Q plot the great majority of the values lie very close to the line.
Likewise, in the plot of the Residuals against the Fitted values, we do not observe any major dispersion of the residuals as the fitted values increase.
For this exercise on logistic regression, we will be working with data from two questionnaires related to student performance. The data concerns student achievement in secondary education at two Portuguese schools. The attributes include student grades and demographic, social and school-related features, and the data was collected using school reports and questionnaires. From these questionnaires, we are particularly interested in the data related to alcohol consumption.
We will use a prepared dataset extracted from the one mentioned above, modified to obtain the information relevant for our analysis of alcohol consumption. We start by reading the table, storing it in the variable “alc” and showing the names of the columns.
alc <- read.table("data/alc.csv", header = TRUE, sep = "\t")
colnames(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
For our analysis we will choose the variables sex, Medu, Fedu and absences. We picked the mother’s and the father’s education (Medu and Fedu) because education has been shown to play an important role in very different aspects of life and health. Finally, we picked the absences variable as we assume it could be linked to high alcohol consumption.
library(tidyr)
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(ggplot2)
library(GGally)
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
##
## Attaching package: 'GGally'
## The following object is masked from 'package:dplyr':
##
## nasa
#We isolate the selected variables according to our hypothesis
pick <- c("high_use","sex","Medu","Fedu","absences")
alc_pick <- select(alc, one_of(pick))
#We check the structure of the selected variables and create some plots with them
str(alc_pick)
## 'data.frame': 382 obs. of 5 variables:
## $ high_use: logi FALSE FALSE TRUE FALSE FALSE FALSE ...
## $ sex : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ absences: int 5 3 8 1 2 8 0 4 0 0 ...
gather(alc_pick) %>% glimpse()
## Warning: attributes are not identical across measure variables;
## they will be dropped
## Observations: 1,910
## Variables: 2
## $ key <chr> "high_use", "high_use", "high_use", "high_use", "high_use", "hi…
## $ value <chr> "FALSE", "FALSE", "TRUE", "FALSE", "FALSE", "FALSE", "FALSE", "…
gather(alc_pick) %>% ggplot(aes(value)) + facet_wrap("key", scales="free") + geom_bar()
## Warning: attributes are not identical across measure variables;
## they will be dropped
p1 <- ggplot(alc_pick, aes(sex, fill=high_use))
p2 <- ggplot(alc_pick, aes(Medu, fill=high_use))
p3 <- ggplot(alc_pick, aes(Fedu, fill=high_use))
p4 <- ggplot(alc_pick, aes(x=high_use, y=absences))
p1 + geom_bar()
p2 + geom_bar()
p3 + geom_bar()
p4 + geom_boxplot()
table(alc_pick$high_use, alc_pick$sex)
##
## F M
## FALSE 156 112
## TRUE 42 72
table(alc_pick$high_use, alc_pick$Medu)
##
## 0 1 2 3 4
## FALSE 1 33 80 59 95
## TRUE 2 18 18 36 40
table(alc_pick$high_use, alc_pick$Fedu)
##
## 0 1 2 3 4
## FALSE 2 53 75 72 66
## TRUE 0 24 30 27 33
table(alc_pick$high_use, alc_pick$absences)
##
## 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 16 17 18 19 20 21 26 27 29
## FALSE 52 38 42 33 24 16 16 9 14 6 5 2 4 1 1 0 0 1 0 2 1 0 0 0
## TRUE 13 13 16 8 12 6 5 3 6 6 2 4 4 1 6 1 1 1 1 0 1 1 1 1
##
## 44 45
## FALSE 0 1
## TRUE 1 0
From what we can see in these initial explorations, the sex variable seems to have some relevance. Fedu (father’s education) shows similar results in both groups. However, Medu (mother’s education) and absences show some indication of an influence on alcohol consumption among young people.
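To make the group comparison easier to read, the counts above can be turned into column-wise proportions (a sketch using base R’s prop.table()):
#Share of high users within each sex; margin = 2 normalizes by column
round(prop.table(table(alc_pick$high_use, alc_pick$sex), margin = 2), 2)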
m <- glm(data=alc_pick, high_use ~ sex + Medu + Fedu + absences, family="binomial")
summary(m)
##
## Call:
## glm(formula = high_use ~ sex + Medu + Fedu + absences, family = "binomial",
## data = alc_pick)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2692 -0.8616 -0.6186 1.0873 2.0814
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.72690 0.37465 -4.609 4.04e-06 ***
## sexM 1.00307 0.24239 4.138 3.50e-05 ***
## Medu -0.13932 0.14599 -0.954 0.340
## Fedu 0.10037 0.14230 0.705 0.481
## absences 0.09866 0.02327 4.240 2.23e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 429.14 on 377 degrees of freedom
## AIC: 439.14
##
## Number of Fisher Scoring iterations: 4
The results show that, of the variables we chose, only sex and absences are statistically significant. We will build a new model with just these two variables.
m2 <- glm(data=alc_pick, high_use ~ sex + absences, family="binomial")
summary(m2)
##
## Call:
## glm(formula = high_use ~ sex + absences, family = "binomial",
## data = alc_pick)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2753 -0.8753 -0.6081 1.0921 1.9920
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.83606 0.22251 -8.252 < 2e-16 ***
## sexM 0.97762 0.23982 4.076 4.57e-05 ***
## absences 0.09659 0.02306 4.189 2.80e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 430.07 on 379 degrees of freedom
## AIC: 436.07
##
## Number of Fisher Scoring iterations: 4
OR <- coef(m2) %>% exp
CI <- confint(m2) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.159445 0.1012577 0.2427684
## sexM 2.658116 1.6710354 4.2863129
## absences 1.101409 1.0549317 1.1548057
According to these results, the odds of high alcohol consumption are roughly 2.7 times higher for males than for females. Absences are also linked: the odds of high consumption grow by about 10% with each additional absence.
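To translate the odds ratios into a concrete probability, we can plug the coefficients of m2 into the inverse logit (a sketch; the example of a male student with 10 absences is hypothetical):
#Predicted probability of high_use for a male with 10 absences
plogis(coef(m2)["(Intercept)"] + coef(m2)["sexM"] + 10 * coef(m2)["absences"])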
prob <- predict(m2, type="response")
alc_pick <- mutate(alc_pick, probs = prob)
alc_pick <- mutate(alc_pick, prediction = probs > .5)
table(high_use = alc_pick$high_use, prediction = alc_pick$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 258 10
## TRUE 88 26
# plot
ggplot(alc_pick, aes(x = probs, y = high_use, col = prediction)) + geom_point()
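The table above can also be summarized as the training error, i.e. the share of misclassified students (a sketch of the usual loss-function approach):
#Mean prediction error: proportion of observations whose predicted
#probability falls on the wrong side of 0.5
loss_func <- function(class, prob) {
  mean(abs(class - prob) > 0.5)
}
loss_func(class = alc_pick$high_use, prob = alc_pick$probs)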
data("Boston")
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
The Boston dataset contains 506 rows and 14 columns of information relevant to the housing values in the suburbs of Boston.
#We show a summary of the dataset
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
#We create a correlation matrix and store it
cor_matrix<-cor(Boston)
cor_matrix %>% round(digits = 2)
## crim zn indus chas nox rm age dis rad tax ptratio
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72 0.38
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04 -0.12
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67 0.19
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29 -0.36
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51 0.26
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53 -0.23
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91 0.46
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00 0.46
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1.00
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54 0.37
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47 -0.51
## black lstat medv
## crim -0.39 0.46 -0.39
## zn 0.18 -0.41 0.36
## indus -0.36 0.60 -0.48
## chas 0.05 -0.05 0.18
## nox -0.38 0.59 -0.43
## rm 0.13 -0.61 0.70
## age -0.27 0.60 -0.38
## dis 0.29 -0.50 0.25
## rad -0.44 0.49 -0.38
## tax -0.44 0.54 -0.47
## ptratio -0.18 0.37 -0.51
## black 1.00 -0.37 0.33
## lstat -0.37 1.00 -0.74
## medv 0.33 -0.74 1.00
#We create a correlation plot
library(corrplot)
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
#We create a distribution plot
Boston %>% gather() %>% ggplot(aes(value)) + facet_wrap(~ key, scales = "free") + geom_density(colour="red")
The correlation plot lets us easily observe the different interactions between the variables of the dataset. The distribution plots also show that the variables do not follow a normal distribution.
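The visual impression of non-normality can be backed with a formal test (a sketch using the Shapiro-Wilk test; small p-values indicate a departure from normality):
#Shapiro-Wilk normality test p-value for each variable
round(apply(Boston, 2, function(x) shapiro.test(x)$p.value), 4)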
#We center and standardize the variables
boston_scaled <- scale(Boston)
#We show a summary of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
# We show the class of the boston_scaled object
class(boston_scaled)
## [1] "matrix"
#We change the object to a data frame to work with it later
boston_scaled <- as.data.frame(boston_scaled)
bins <- quantile(boston_scaled$crim)
#We create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate).
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label = c("low","med_low","med_high","high"))
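#Sanity check (added sketch): the quantile breaks should give four
#classes of roughly equal size
table(crime)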
#We remove the old crime rate variable from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
#We divide the dataset to train and test sets, so that 80% of the data belongs to the train set.
n <- nrow(boston_scaled)
#We randomly choose 80% of the rows
ind <- sample(n, size = n * 0.8)
#We create train set
train <- boston_scaled[ind,]
#We create test set
test <- boston_scaled[-ind,]
#We use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables
lda.fit <- lda(crime ~ ., data = train)
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# We draw the plot with the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
#We save the correct classes from test data
correct_classes <- test$crime
#We remove the crime variable from test data
test <- dplyr::select(test, -crime)
#We predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
#We cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 14 13 1 0
## med_low 2 13 8 0
## med_high 1 4 19 1
## high 0 0 0 26
As we can observe from the cross tabulation, our model predicts the high category best. The low category is also predicted reasonably well. Finally, the med_low and med_high categories have lower prediction accuracy, even though the majority of their cases are still classified correctly.
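The overall accuracy can be computed directly from the same objects (a quick sketch):
#Proportion of test observations classified into the correct category
mean(correct_classes == lda.pred$class)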
#We load the data and standardize it
data(Boston)
boston_scaled <- scale(Boston)
#We calculate the pairwise distances between observations with the Euclidean distance metric
dist_eu <- dist(boston_scaled)
#We run k-means algorithm on the dataset.
km <-kmeans(boston_scaled, centers = 2)
#We visualize the clusters
pairs(boston_scaled, col = km$cluster)
Of the numbers of centers we tried, 2 seems to give the best representation of the results, as with this number we can see clearly differentiated clusters. With more centers, the clusters are not separated properly and they tend to occupy the same regions of our plots.
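One way to back this choice is to look at how the total within-cluster sum of squares (WCSS) drops as the number of centers grows; the “elbow” where the drop levels off suggests a good k (a sketch, with a fixed seed because k-means starts from random centers):
set.seed(123)
k_max <- 10
#Total within-cluster sum of squares for k = 1..k_max
twcss <- sapply(1:k_max, function(k) kmeans(boston_scaled, centers = k)$tot.withinss)
plot(1:k_max, twcss, type = "b", xlab = "number of centers", ylab = "total WCSS")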
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = classes)
library(corrplot)
library(dplyr)
library(ggplot2)
library(tidyr)
library(GGally)
human <- read.csv(file ="~/IODS-project/data/human.csv", row.names = 1)
summary(human)
## edu2FM labFM lifeExp educationExp
## Min. :0.1717 Min. :0.1857 Min. :49.00 Min. : 5.40
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:66.30 1st Qu.:11.25
## Median :0.9375 Median :0.7535 Median :74.20 Median :13.50
## Mean :0.8529 Mean :0.7074 Mean :71.65 Mean :13.18
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:77.25 3rd Qu.:15.20
## Max. :1.4967 Max. :1.0380 Max. :83.50 Max. :20.20
## gni matMor birth repParl
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
ggplot(gather(human), aes(value)) + facet_wrap(~ key, scales = "free") + geom_density(fill="#FF9999", colour="black")
ggpairs(human)
cor(human) %>% corrplot(type = "upper")
We have eight numeric variables. In the graphical representations we can see that most variables accumulate the majority of their cases around a central point, although some (such as gni and matMor) are clearly skewed. We also observe some negative and positive relationships between variables, such as between the adolescent birth rate and the maternal mortality ratio.
PCA on the non-standardized human data
pca_human <- prcomp(human)
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
Because PCA should be performed on standardized values, we get some warnings, and the results we obtain are driven by the scale of the variables: a variable measured on a much larger scale (like GNI) dominates the plot.
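The scale problem is easy to verify by comparing the variances of the raw variables (a quick sketch): gni is orders of magnitude larger than everything else, so it alone dominates the unstandardized principal components.
#Variances of the raw variables
round(apply(human, 2, var), 1)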
#We standardize the variables
human_std <- scale(human)
pca_humanstd <- prcomp(human_std)
s <- summary(pca_humanstd)
s
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion 0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
## PC8
## Standard deviation 0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion 1.00000
pca_pr <- round(100*s$importance[2, ], digits = 1)
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
biplot(pca_humanstd, cex = c(0.8, 1), col = c("grey40", "red"), xlab = pc_lab[1], ylab = pc_lab[2])
As we mentioned, it was impossible to observe useful information in the biplot with the non-standardized values. However, with this new biplot we are able to see some information clearly. We observe, for example, how the first Principal Component, on the X axis, corresponds to 53.6% of the total variance, while the second Principal Component, on the Y axis, corresponds to 16.2% of the total variance in the original variables.
We can also observe how the original variables split into two directions (almost parallel to the axes). For example, repParl and labFM point in the same direction as PC2, while the other variables point in the direction of PC1. We also see that the angles between most variable arrows are small, which indicates a high correlation (positive or negative) between them.
More specifically, we can observe that maternal mortality is highly correlated with the adolescent birth rate. We also observe that the ratio of female to male workers is linked to the number of female parliamentary representatives, and that these two variables have little correlation with maternal mortality or the adolescent birth rate. Finally, we can observe that life expectancy, expected education, GNI and the ratio of female to male education are highly positively correlated with each other and strongly negatively correlated with the adolescent birth rate and the maternal mortality ratio.
The first Principal Component seems to represent general health and living conditions together with gender equality in the domestic sphere.
On the other hand, the second Principal Component seems to represent the integration of women in the public sphere (politics and jobs).
library(FactoMineR)
data(tea)
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300 36
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch","pub","always")
tea_time <- select(tea, one_of(keep_columns))
str(tea_time)
## 'data.frame': 300 obs. of 8 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ always: Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped
mca <- MCA(tea_time, graph = TRUE)
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = TRUE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6 Dim.7
## Variance 0.212 0.204 0.173 0.147 0.141 0.127 0.120
## % of var. 13.076 12.545 10.654 9.025 8.682 7.814 7.371
## Cumulative % of var. 13.076 25.621 36.275 45.300 53.982 61.796 69.167
## Dim.8 Dim.9 Dim.10 Dim.11 Dim.12 Dim.13
## Variance 0.106 0.106 0.093 0.086 0.063 0.046
## % of var. 6.546 6.518 5.737 5.293 3.880 2.858
## Cumulative % of var. 75.714 82.232 87.969 93.262 97.142 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.446 0.311 0.228 | 0.229 0.086 0.060 | 0.250
## 2 | -0.406 0.259 0.131 | 0.202 0.067 0.033 | 0.656
## 3 | -0.457 0.327 0.387 | 0.143 0.033 0.038 | 0.110
## 4 | -0.547 0.469 0.538 | 0.044 0.003 0.004 | -0.210
## 5 | -0.344 0.186 0.166 | 0.049 0.004 0.003 | -0.119
## 6 | -0.457 0.327 0.387 | 0.143 0.033 0.038 | 0.110
## 7 | -0.457 0.327 0.387 | 0.143 0.033 0.038 | 0.110
## 8 | -0.406 0.259 0.131 | 0.202 0.067 0.033 | 0.656
## 9 | 0.287 0.129 0.058 | -0.451 0.333 0.145 | 0.461
## 10 | 0.439 0.303 0.147 | -0.141 0.032 0.015 | 0.835
## ctr cos2
## 1 0.120 0.072 |
## 2 0.829 0.343 |
## 3 0.023 0.022 |
## 4 0.085 0.079 |
## 5 0.027 0.020 |
## 6 0.023 0.022 |
## 7 0.023 0.022 |
## 8 0.829 0.343 |
## 9 0.410 0.151 |
## 10 1.342 0.530 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2
## black | 0.279 1.132 0.026 2.764 | 0.339 1.737 0.038
## Earl Grey | -0.095 0.340 0.016 -2.202 | -0.329 4.275 0.195
## green | -0.072 0.033 0.001 -0.436 | 1.165 9.162 0.168
## alone | -0.105 0.420 0.020 -2.470 | 0.197 1.545 0.072
## lemon | 0.956 5.914 0.113 5.812 | -0.343 0.793 0.015
## milk | -0.293 1.058 0.023 -2.609 | -0.257 0.852 0.018
## other | 0.814 1.169 0.020 2.475 | -1.209 2.687 0.045
## tea bag | -0.675 15.187 0.596 -13.346 | 0.015 0.008 0.000
## tea bag+unpackaged | 0.690 8.785 0.217 8.064 | -0.673 8.692 0.206
## unpackaged | 1.385 13.536 0.261 8.842 | 1.683 20.846 0.386
## v.test Dim.3 ctr cos2 v.test
## black 3.353 | 1.098 21.454 0.394 10.860 |
## Earl Grey -7.645 | -0.433 8.714 0.338 -10.059 |
## green 7.085 | 0.072 0.041 0.001 0.437 |
## alone 4.640 | -0.042 0.083 0.003 -0.988 |
## lemon -2.084 | -0.882 6.172 0.096 -5.359 |
## milk -2.293 | 0.245 0.910 0.016 2.184 |
## other -3.675 | 2.426 12.753 0.182 7.379 |
## tea bag 0.306 | -0.106 0.459 0.015 -2.095 |
## tea bag+unpackaged -7.856 | 0.350 2.772 0.056 4.089 |
## unpackaged 10.748 | -0.414 1.482 0.023 -2.641 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.026 0.247 0.418 |
## How | 0.146 0.096 0.276 |
## how | 0.638 0.482 0.065 |
## sugar | 0.028 0.032 0.283 |
## where | 0.714 0.586 0.102 |
## lunch | 0.010 0.045 0.013 |
## pub | 0.100 0.117 0.097 |
## always | 0.039 0.026 0.131 |
plot.MCA(mca, invisible=c("var","quali.sup"), cex=0.7)
plot.MCA(mca, invisible=c("ind"), cex=0.7)
plot(mca, invisible=c("quali.sup"), habillage= "quali")
We can observe in the different plots which categories define the created dimensions, as well as the relationships between the variables. Categories that are closer together in the plot are more strongly associated. For instance, in the first MCA factor map we see that the categories tea shop and unpackaged are closer to each other than to any other category, which explains the connection between them.